The other night I attended a press dinner hosted by an enterprise company called Box. Other guests included the leaders of two data-oriented companies, Datadog and MongoDB. Usually the executives at these soirees are on their best behavior, especially when the discussion is on the record, like this one. So I was startled by an exchange with Box CEO Aaron Levie, who told us he had a hard stop at dessert because he was flying that night to Washington, DC. He was headed to a special-interest-thon called TechNet Day, where Silicon Valley gets to speed-date with dozens of Congress critters to shape what the (uninvited) public will have to live with. And what did he want from that legislation? "As little as possible," Levie replied. "I will be single-handedly responsible for stopping the government."
He was joking about that. Sort of. He went on to say that while regulating clear abuses of AI like deepfakes makes sense, it's way too early to consider restraints like forcing companies to submit large language models to government-approved AI cops, or scanning chatbots for things like bias or the ability to hack real-life infrastructure. He pointed to Europe, which has already adopted restraints on AI, as an example of what not to do. "What Europe is doing is quite risky," he said. "There's this view in the EU that if you regulate first, you kind of create an atmosphere of innovation. That empirically has been proven wrong."
Levie's remarks fly in the face of what has become a standard position among Silicon Valley's AI elites like Sam Altman. "Yes, regulate us!" they say. But Levie notes that when it comes to exactly what the laws should say, the consensus falls apart. "We as a tech industry do not know what we're actually asking for," Levie said. "I have not been to a dinner with more than five AI people where there's a single agreement on how you would regulate AI." Not that it matters: Levie thinks that dreams of a sweeping AI bill are doomed. "The good news is there's no way the US would ever be coordinated in this kind of way. There simply will not be an AI Act in the US."
Levie is known for his irreverent loquaciousness. But in this case he's simply more candid than many of his colleagues, whose regulate-us-please position is a form of sophisticated rope-a-dope. The single public event of TechNet Day, at least as far as I could discern, was a livestreamed panel discussion about AI innovation that included Google's president of global affairs Kent Walker and Michael Kratsios, the most recent US chief technology officer and now an executive at Scale AI. The feeling among those panelists was that the government should focus on protecting US leadership in the field. While conceding that the technology has its risks, they argued that existing laws pretty much cover the potential nastiness.
Google's Walker seemed particularly alarmed that some states were developing AI legislation on their own. "In California alone, there are 53 different AI bills pending in the legislature today," he said, and he wasn't boasting. Walker of course knows that this Congress can hardly keep the government itself afloat, and the prospect of both houses successfully juggling this hot potato in an election year is as remote as Google rehiring the eight authors of the transformer paper.
The US Congress does have legislation pending. And the bills keep coming, some perhaps less meaningful than others. This week, Representative Adam Schiff, a California Democrat, introduced a bill called the Generative AI Copyright Disclosure Act of 2024. It would require the makers of large language models to present to the Copyright Office "a sufficiently detailed summary of any copyrighted works used … in the training data set." It's not clear what "sufficiently detailed" means. Would it be OK to say, "We simply scraped the open web"? Schiff's staff explained to me that they were adopting a measure in the EU's AI Act.